Since Karma Changes was posted, there have been 20 top-level posts. With one exception, all of those posts are presently at positive karma. EDIT: I was using the list on the wiki, which is not up to date. Incorporating the posts between the last one on that list and now, there are a total of 76 posts between Karma Changes and today. This one is the only new data point on negatively rated posts, so it's 2 of 76.
I looked at the 40 posts just prior to Karma Changes, and six of the forty are still negative. It looks like before the change, many times more posts were voted into the red. I have observed that a number of recent posts were in fact downvoted, sometimes by a fair amount, but crept back up over time.
Hypothesis: the changes included removing the display minimum of 0 for top-level posts. Now that people can see that something has been voted negative, instead of just being at 0 (which could be the result of indifference), sympathy kicks in and people provide upvotes.
Is this a behavior we want? If not, what can we do about it?
One of the expected effects of the karma change is to make people more cautious about what they put in a top level post. Perhaps this is only evidence of that effect.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
Eliezer's posts are always very thoughtful, thought-provoking and mind-expanding - and I'm not the only one to think this, seeing the vast amounts of karma he's accumulated.
However, reviewing some of the weaker posts (such as high status and stupidity and two aces), and rereading them as if they hadn't been written by Eliezer, I saw them differently - still good, but not really deserving superlative status.
So I was wondering if Eliezer could write a few of his posts under another name, if this was reasonable, to see if the Karma reaped was very different.
This is a reasonable justification for using a sockpuppet, and I'll try to keep it in mind the next time I have something to write that would not be instantaneously identifiable as me.
I'm one of the 5-10.
There is a depth to "this is an Eliezer argument, part of a rich and complicated mental world with many different coherent aspects to it" that is lacking in "this is a random post on a random subject". In the first case, you are seeing a facet of larger wisdom; in the second, just an argument to evaluate on its merits.
I thought of a voting tip that I'd like to share: when you are debating someone, and one of your opponent's comments gets downvoted, don't let it stay at -1. Either vote it up to 0, or down to -2; otherwise your opponent might infer that you are the one who downvoted it. Someone accused me of this some time ago, and I've been afraid of it happening again ever since.
It took a long time for this countermeasure to occur to me, probably because the natural reaction when someone accuses you of unfair downvoting is to refrain from downvoting, while the counterintuitive, but strategically correct response is to downvote more.
LW has become more active lately, and the experience has grown old, so it's likely I won't be skimming "recent comments" (or any comments) systematically anymore (unless I miss the fun and change my mind, which is possible). Reliably, I'll only be checking direct replies to my comments and private messages (red envelope).
A welcome feature to alleviate this problem would be an aggregator for given threads: functionality to add posts, specific comments, and users to a set of subscribed items. Then all comments on the subscribed posts (or all comments within depth k of the top-level comments), and all comments in the threads under subscribed comments, would appear together the way "recent comments" do now. Each comment in this stream should have links to unsubscribe from the subscribed item that caused it to appear, or to add an exclusion on a given thread within another subscribed thread. (Maybe being subscribed to everything, including new items, by default is the right mode, provided unsubscribing is easy.)
This may look like a lot, but right now there is no functionality for reducing the reading load, so as more people start actively commenting, fewer people will be able to follow.
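For concreteness, here is a rough sketch of the data model such an aggregator might use (a toy model; all names and types are hypothetical, not an actual LW API):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    id: int
    post_id: int
    author_id: int
    ancestors: tuple  # ids of parent comments, root of the thread first

@dataclass
class Subscriptions:
    posts: set = field(default_factory=set)             # subscribed post ids
    threads: set = field(default_factory=set)           # subscribed comment ids
    users: set = field(default_factory=set)             # subscribed author ids
    excluded_threads: set = field(default_factory=set)  # muted sub-thread roots

    def wants(self, c: Comment) -> bool:
        """Should this comment appear in the aggregated 'recent comments' stream?"""
        thread = set(c.ancestors) | {c.id}
        if thread & self.excluded_threads:  # an exclusion overrides any subscription
            return False
        return (c.post_id in self.posts
                or c.author_id in self.users
                or bool(set(c.ancestors) & self.threads))
```

The stream would then just be a filter of new comments through wants(), and each per-comment unsubscribe link would map to removing the matching id from the corresponding set.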
While the LW voting system seems to work, and is possibly better than the absence of any threshold, my experience is that the posts that contain valuable and challenging content don't get upvoted, while the most upvotes are received by posts that state the obvious or express an emotion with which readers identify.
I feel there's some counterproductivity there, as well as an encouragement of groupthink. Most significantly, I have noticed that posts which challenge that which the group takes for granted get downvoted. In order to maintain karma, it may in fact be important not to annoy others with ideas they don't like - to avoid challenging majority wisdom, or to do so very carefully and selectively. Meanwhile, playing on the emotional strings of the readers works like a charm, even though that's one of the most bias-encouraging behaviors, and rather counterproductive.
I find those flaws of some concern for a site like this one. I think the voting system should be altered to make upvoting as well as downvoting more costly. If you have to pick and choose which comments and articles to upvote or downvote, I think people will vote with more reason.
There are various ways to make voti...
If I understand the Many-Worlds Interpretation of quantum mechanics correctly, it posits that decoherence takes place due to strict unitary time-evolution of a quantum configuration, and thus no extra collapse postulate is necessary. The problem with this view is that it doesn't explain why our observed outcome frequencies line up with the Born probability rule.
Scott Aaronson has shown that if the Born rule doesn't hold, then quantum computing allows superluminal signalling and the rapid solution of PP-complete problems. So we could adopt "no superluminal signalling" or "no rapid solutions of PP-complete problems" as an axiom, and this would imply the Born probability rule.
I wanted to ask of those who have more knowledge and have spent longer thinking about MWI: is the above an interesting approach? What justifications could exist for such axioms? (...maybe anthropic arguments?)
ETA: Actually, Aaronson showed that in a class of rules equating probability with the p-norm, only the 2-norm had the properties I listed above. But I think that the approach could be extended to other classes of rules.
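For reference, as I understand Aaronson's setup, the class of rules in question equates probability with a normalized p-norm of the amplitudes (the explicit normalization here is my gloss):

```latex
P(i) = \frac{|\alpha_i|^p}{\sum_j |\alpha_j|^p},
\qquad \text{with the Born rule as the special case } p = 2.
```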
An ~hour-long talk with Douglas Hofstadter, author of Gödel, Escher, Bach.
Titled: Analogy as the Core of Cognition
Fun sneaky confidence exercise (reasons why exercise is fun and sneaky to be revealed later):
Please reply to this comment with your probability level that the "highest" human mental functions, such as reasoning and creative thought, operate solely on a substrate of neurons in the physical brain.
<.05
I am no cognitive scientist, but I believe some of my "thinking" takes place outside of my brain (elsewhere in my body), and I am almost certain some of it takes place on paper and on my computer.
When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly a SAI could figure out what your body must have been like just from your brain, not sure.
Recently, however, I have contracted a disease which will kill most of my motor neurons. So the body will be of less value, and I may change to just the head.
The way motor neurons work is that an upper motor neuron (UMN) descends from the motor cortex of the brain into the spinal cord, and there it synapses onto a lower motor neuron (LMN), which projects from the spinal cord to the muscle. Just two steps. The actual architecture is more complex, however: the LMNs receive inputs not only from UMNs but also from sensory neurons coming from the body, indirectly through interneurons located within the spinal cord. This forms a sort of loop which is responsible for simple reflexes, but also for stable standing, positioning, etc. There are also other kinds of neurons that descend from the brain into the spinal cord, including from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job, which is very odd.
A query to Unknown, with whom I have this bet going:
Unknown: When someone designs a superintelligent AI (it won't be Eliezer), without paying any attention to Friendliness (the first person who does it won't), and the world doesn't end (it won't), it will be interesting to hear Eliezer's excuses.
EY: Unknown, do you expect money to be worth anything to you in that situation? If so, I'll be happy to accept a $10 payment now in exchange for a $1000 inflation-adjusted payment in that scenario you describe.
I recently found within myself a tiny shred of an...
This is actually a damned good question:
http://www.scientificblogging.com/mark_changizi/why_doesn%E2%80%99t_size_matter%E2%80%A6_brain
To reiterate a request from Normal Cryonics: I'm looking for links to the best writing out there against cryonics, especially anything that addresses the plausibility of reanimation; the more detailed, the better.
I'm not looking for new arguments in comments, just links to what's already "out there". If you think you have a good argument against cryonics that hasn't already been well presented, please put it online somewhere and link to it here.
I've created a rebuttal to komponisto's misleading Amanda Knox post, but don't have enough karma to create my own top-level post. For now, I've just put it here:
If you actually want to debate this, we could do so in the comments section of my post, or alternatively over in the Richard Dawkins forum.
(Though since you say "my intent is merely to debunk komponisto's post rather than establish Amanda's guilt", I'm suspicious. See Against Devil's Advocacy.)
Make sure you've read my comments here in addition to my post itself.
There is one thing I agree with you about, and that is that this statement of mine
these two things constituting so far as I know the entirety of the physical "evidence" against the couple
is misleading. The misleading part is the phrase "so far as I know", which has been interpreted by people who evidently did not read my preceding survey post to mean that I had not heard about all the other alleged physical evidence. I didn't consider this interpretation because I was assuming that my readers had read both True Justice and Friends of Amanda, knew from my previous post that I had obviously read them both myself, and would understand my statement for what it was -- a dismissal of the rest of the so-called "evidence". However, in retrospect, I should have foreseen this misunderstand...
We are status-oriented creatures, especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However, it is also one of the more efficient ways we have of getting at truths, so it must be doing some things correctly. I think it may have some ideas surrounding it that reduce the problems of its being a social enterprise.
One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal...
I seem to be entering a new stage in my 'study of Less Wrong beliefs' where I feel like I've identified and assimilated a large fraction of them, but am beginning to notice a collusion of contradictions. This isn't so surprising, since Less Wrong is the grouped beliefs of many different people, and it's each person's job to find their own self-consistent ribbon.
But just to check one of these -- Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being ...
IAWYC except, of course, for this:
In principle our best physics tells us that determinism is just false as a metaphysics.
As said above and elsewhere, MWI is perfectly deterministic. It's just that there is no single fact of the matter as to which outcome you will observe from within it, because there's not just one time-descendant of you.
According to some people, we here at Less Wrong are good at determining the truth. Other people are notoriously not.
I don't know that Less Wrong is the appropriate venue for this, but I have felt for some time that I trust the truth-seeking capability here and that it could be used for something more productive than arguments about meta-ethics (no offense to the meta-ethicists intended). I also realize that people are fairly supportive of SIAI here in terms of giving spare cash away, but I feel like the community would be a good jumping-off point for a po...
Bleg for assistance:
I’ve been intermittently discussing Bayes’ Theorem with the uninitiated for years, with uneven results. Typically, I’ll give the classic problem:
3,000 people in the US have Sudden Death Syndrome. I have a test that is 99% accurate; that is, it will be wrong on any given person one percent of the time. Steve tests positive for SDS. What is the chance that he has it?
Afterwards, I explain the answer by comparing the false positives to the true positives. And then I see the Bayes' Theorem Look, which conveys to me this: "I know Mayne's g...
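For concreteness, a sketch of that false-positive-vs-true-positive comparison; the US population figure is an assumption the problem statement leaves implicit:

```python
# Roughly 1% of everyone tested comes back (falsely) positive, swamping
# the handful of true positives.
population = 300_000_000  # assumed US population
sick = 3_000
accuracy = 0.99

true_positives = sick * accuracy                        # ~2,970
false_positives = (population - sick) * (1 - accuracy)  # ~3,000,000

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_sick_given_positive:.4%}")  # ~0.0989%, i.e. about 1 in 1000
```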
For this specific case, you could try asking the analogous question with a higher probability value. E.g. "if you've got a one-in-two DNA match on a suspect, does that mean it's one-in-two that you've got that dude's DNA?". Maybe you can have some graphic that's meant to represent several million people, with half of the folks colored as positive matches. When they say "no, it's not one-in-two", you can work your way up to the three million case by showing pictures displaying the estimated number of hits for a 1 to 3, 1 to 5, 1 to 10, 1 to 100, 1 to 1000 etc. case.
In general, try to use examples that are familiar from everyday life (and thus don't feel like math). For the Bayes' theorem introduction, you could try "a man comes to a doctor complaining about a headache. The doctor knows that both the flu and brain cancer can cause headaches. If you knew nothing else about the case, which one would you think was more likely?" Then, after they've (hopefully) said that the man is more likely to be suffering from the flu, you can mention that brain cancer is much more likely to cause a headache than the flu is, but because the flu is so much more common, their answer was nevertheless the correct one (a worked version with made-up numbers follows the examples below).
Other good examples:
Most car accidents occur close to people's homes, not because it's more dangerous close to home, but because people spend most of their driving time close to their homes.
Most pedestrians who get hit by cars get hit at crosswalks, not because it's more dangerous at a crosswalk, but because most people cross at crosswalks.
Most women who get raped get raped by people they know, not because strangers are less dangerous than people they know, but because they spend more time around people they know.
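Here is the doctor example with made-up numbers; the priors and likelihoods below are assumptions for illustration only:

```python
p_flu, p_cancer = 0.05, 0.0001   # assumed base rates
p_headache_given_flu = 0.5       # assumed likelihoods
p_headache_given_cancer = 0.9    # cancer more likely to cause a headache

# The posterior is proportional to prior * likelihood:
flu_score = p_flu * p_headache_given_flu           # 0.025
cancer_score = p_cancer * p_headache_given_cancer  # 0.00009

print(flu_score / cancer_score)  # ~278: flu wins despite the weaker likelihood
```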
Is there a way to get a "How am I doing?" review or some sort of mentor that I can ask specific questions? The karma feedback just isn't giving me enough detail, but I don't really want to pester everyone every time I have a question about myself.
The basic problem I need to solve is this: When I read an old post, how do I know I am hearing what I am supposed to be hearing? If I have a whole list of nitpicky questions, where do I go? If a question of mine goes unanswered, what do I do?
I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.
So I actually have this idea of doing a series (or just a couple) of top level posts about rationality and basketball (or sports in general). I'm partly holding off because I'm worried that the rationality aspects are too basic and obvious and no one else will care about the basketball parts.
But sports are great for talking about rationality because there is never any ambiguity about the results of our predictions, and because there are just bucket-loads of data to work with. On the other hand, a surprising amount of irrationality can still be found even in professional leagues where being wrong means losing money.
Anyway, to answer your question: you get two kinds of information from play at the beginning of the game. First, you get information about what the final score will be from the points that have been scored already. So if my team is up 10 points, the other team needs to score 11 more points over the remainder of the game in order to win; the less time remaining in the game, the more significant this gets. The other kind of information is information about how the teams are playing that day. But if a team is playing significantly better or worse than you would have predicte...
I just finished reading Jaron Lanier's One-Half of a Manifesto for the second time.
The first time I read it must have been three years ago, and although I felt there were several things wrong with it, I hadn't come to what is now an inescapable conclusion for me: Jaron Lanier is one badly, badly confused dude.
I mean, I knew people could be this confused, but those people are usually postmodernists or theologians or something, not smart computer scientists. Honestly, I find this kind of shocking, and more than a little depressing.
I am becoming increasingly disinclined to stick out the grad school thing; it's not fun anymore, and really, a doctorate in philosophy is not going to let me do anything substantially different in kind from what I'm doing now once I have it. Nor will it earn me barrels of money or do immense social good, so if it's not fun, I'm kinda low on reasons to stay. I haven't outright decided to leave, but you know what they say. I'm putting out tentative feelers for what else I'd do if I do wind up abandoning ship. Can anyone think of a use for me - ideally one that doesn't require me to eat my savings while I pick up other credentials first?
Stem cell experts say they believe a small group of scientists is effectively vetoing high quality science from publication in journals.
Mind-killing taboo topic that it is, I'd like to have a comment thread about LW readers' thoughts about US politics.
I recall EY commenting at some point that the way to make political progress is to convert intractable political problems into tractable technical problems. I think this kind of discussion would be more interesting and more profitable than a "traditional" mind-killing political debate.
It might be interesting, for example, to develop formal rationalist political methods. Some principles might include:
How about per-capita post scoring?
Why not divide a post's number of up-votes by the number of unique logged-in people who have viewed it? This would correct for the distortion of scores caused by varying numbers of readers. Some old stuff is very good but not read much, and scores are in general inflating as the Less Wrong population grows.
I think such a change would be orthogonal to karma accounting; I'm only suggesting a change in the number displayed next to each post.
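A minimal sketch of the suggested display score (the function and parameter names are mine, not an actual LW API):

```python
def per_capita_score(upvotes: int, unique_logged_in_viewers: int) -> float:
    """Up-votes per unique logged-in viewer, so a little-read gem isn't
    out-scored by a mediocre post with huge traffic."""
    if unique_logged_in_viewers == 0:
        return 0.0
    return upvotes / unique_logged_in_viewers

# An old post read by 120 people vs. a popular new one read by 4,000:
print(per_capita_score(60, 120))    # 0.5
print(per_capita_score(300, 4000))  # 0.075
```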
I'd like to draw people's attention to a couple of recent "karma anomalies". I think these show a worrying tendency for arguments that support the majority LW opinion to accumulate karma regardless of their actual merits.
Another content opinion question: What and where is considered appropriate to discuss personal progress/changes/introspection regarding Rationality? I assume that LessWrong is not to be used for my personal Rationality diary.
The reason I ask is that the various threads discussing my beliefs seem to pick up some interest and they are very helpful to me personally.
I suppose the underlying question is this: If you had to choose topics for me to write about, what would they be? My specific religious beliefs have been requested by a few people, so that is given. Is there anything else? If I were to talk about my specific beliefs, what is the best way to do so?
What is the correct term for the following distinction:
Scenario A: The fair coin has 50% chance to land heads.
Scenario B: The unfair coin has an unknown chance to land heads, so I assign it a 50% chance to get heads until I get more information.
If A flips up heads it won't change the 50%. If B flips up heads it will change the 50%. This makes Scenario A more [something] than Scenario B, but I don't know the right term.
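One standard way to exhibit the difference is to put a prior on the unknown coin's bias; the uniform Beta(1,1) prior below is an assumption for illustration, not part of the question:

```python
from fractions import Fraction

# Scenario A: a coin known to be fair. The 50% is a fixed chance,
# so observing heads doesn't move it.
p_a_before = Fraction(1, 2)
p_a_after = Fraction(1, 2)  # unchanged by the evidence

# Scenario B: unknown bias with a uniform Beta(1,1) prior. The predictive
# probability of heads is alpha/(alpha+beta): it starts at 1/2 but shifts.
alpha, beta = 1, 1
p_b_before = Fraction(alpha, alpha + beta)  # 1/2
alpha += 1                                  # observe one heads
p_b_after = Fraction(alpha, alpha + beta)   # 2/3

print(p_a_before, p_a_after)  # 1/2 1/2
print(p_b_before, p_b_after)  # 1/2 2/3
```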
Daniel Varga wrote
In a universe where merging consciousnesses is just as routine as splitting them, the transhumans may have very different intuitions about what is ethical.
What I started wondering about when I began assimilating this idea of merging, copying and deleting identities is: if this were possible, what kind of legal/justice system could we depend upon to enforce non-criminal behavior?
Right now we can threaten to punish people by restricting their freedom over a period of time that is significant with respect to the length of their lifetime. H...
Measure your risk intelligence, a quiz in which you answer questions on a confidence scale from 0% to 100% and your calibration is displayed on a graph.
Obviously a linear probability scale is the Wrong Thing - if we were building it, we'd use a deciban scale and logarithmic scoring - but interesting all the same.
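A minimal sketch of the logarithmic scoring mentioned above (the function name and interface are mine):

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Logarithmic score for a probability p assigned to a yes/no question.
    Closer to 0 is better; confident wrong answers are punished steeply,
    which a linear scale fails to do."""
    return math.log(p if outcome else 1.0 - p)

# A wrong 99% answer costs far more than a wrong 60% answer:
print(log_score(0.99, False))  # log(0.01) ~ -4.61
print(log_score(0.60, False))  # log(0.40) ~ -0.92
```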
I may be stretching the openness of the thread a little here, but I have an interesting mechanical engineering hobbyist project, and I have no mechanical aptitude. I figure some people around here might, and this might be interesting to them.
The Avacore CoreControl is a neat little device, based on very simple mechanical principles, that lets you exercise for longer and harder than you otherwise could, by cooling down your blood directly. It pulls a slight vacuum on your hand, and directly applies ice to the palm. The vacuum counteracts the vasoconstriction...
We all know politics is the mind-killer, but it sometimes comes up anyway. Eliezer maintains that it is best to start with examples from other perspectives, but alas there is one example of current day politics which I do not know how to reframe: the health care debate.
As far as I can tell, almost every provision in the bill is popular, but the bill is not. This seems to be primarily because Republicans keep lying about it (I couldn't find a good link, but there was a clip on The Daily Show of Obama saying "I can't find a reputable economist who agre...
Anyone willing to give some uneducated fool a little math coaching? I'm really just starting with math, and I probably shouldn't get into this stuff before reading up more, but it's really bothering me. I came across this page today: http://wiki.lesswrong.com/wiki/Prior_odds
My question: how do you get a likelihood ratio of 11:1 in favor of a diamond? I'm getting this: .88/(.88+7.92) = .1, thus a 10% probability that a beeping box contains a diamond, since the diamond-detector is 88% likely to beep on that 1 box and 8% likely to beep on each of the 99 other boxes...
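For what it's worth, the arithmetic works out like this (assuming the wiki page's setup of 1 diamond box among 100): the 11:1 is a ratio of likelihoods, while the .88/(.88+7.92) = .1 computation is already the posterior probability.

```python
p_beep_given_diamond = 0.88
p_beep_given_empty = 0.08

likelihood_ratio = p_beep_given_diamond / p_beep_given_empty
print(likelihood_ratio)  # 11.0 -> the "11:1 in favor of a diamond"

# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio
prior_odds = 1 / 99                             # 1 diamond box, 99 empty
posterior_odds = prior_odds * likelihood_ratio  # 11:99, i.e. 1:9
p_diamond_given_beep = posterior_odds / (1 + posterior_odds)
print(round(p_diamond_given_beep, 2))  # 0.1 -- the 10% you computed
```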
Would there be interest in a more general discussion forum for rationalists, or does one already exist? I think it would be useful to test the discussion of politics, religion, entertainment, and other topics without ruining lesswrong. It could attract a wider audience and encourage current lurkers to post.
This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.
Unfortunately the paper is rather short, and I haven't been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.
It takes O(n) memory units just to store a list of size n. Why should computers have asymptotically more memory units than processing units? You don't get to assume an infinitely parallel computer, but O(n)-parallel is only reasonable.
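For what it's worth, O(n)-time sorting on an O(n)-parallel machine is classical. Here is a sequential sketch of odd-even transposition sort; within each round all the compare-exchanges touch disjoint pairs, so each round could run in O(1) time on O(n) comparators, for O(n) time overall:

```python
def odd_even_transposition_sort(items: list) -> list:
    """Sort in n rounds of disjoint compare-exchanges (parallelizable)."""
    a = list(items)
    n = len(a)
    for r in range(n):
        # Alternate between even-indexed and odd-indexed pairs.
        for i in range(r % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```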
My first impression of the paper is: We can already do this, it's called an FPGA, and the reason we don't use them everywhere is that they're hard to program for.
I've been reading Probability Theory by E.T. Jaynes and I find myself somewhat stuck on exercise 3.2. I've found ways to approach the problem that seem computationally intractable (at least by hand). It seems like there should be a better solution. Does anyone have a good solution to this exercise, or even better, know of a collection of solutions to the exercises in the book?
At this point, if you have a complete solution, I'd certainly settle for vague hints and outlines if you didn't want to type the whole thing. Thanks.
Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.
May I?
I wonder if physicists would admit the effect of genealogy on their interpretation of QM?
People who ask physicists their interpretation of QM: next time, if the physicist admits controversy, ask about genealogy and other forms of epistemic luck.
While reading old posts and looking for links to topics in upcoming drafts I have noticed that the Tags are severely underutilized. Is there a way to request a tag for a particular post?
Example: Counterfactual has one post and it isn't one of the heavy hitters on the subject.
Hi LessWrongers,
I'm aware that Newcomb's problem has been discussed a lot around here. Nonetheless, I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view elsewhere. Can someone point to the relevant knockdown argument? (I found Newcomb's Problem and Regret of Rationality, but the only argument therein seems to be that 1-boxers get what they want, and that's what makes 1-boxing rational. Now, getting what one wants seems to be neither necessary nor sufficient, because you should get it because of your rational choice,...
I just read Outliers and I'm curious -- is there anything that would have taken 10,000 hours in the EEA that would support Gladwell's "rule"? Is there anything else in neurology/our understanding of the brain that would make the idea that this is the amount of practice that's needed to succeed in something make sense?
Graphene transistors promise 100GHz speeds
http://arstechnica.com/science/2010/02/graphene-fets-promise-100-ghz-operation.ars
100-GHz Transistors from Wafer-Scale Epitaxial Graphene
Here's another one. While reading Wikipedia on Chaitin's constant, I came across an article by Chaitin from 1956 (EDIT: oops, it's 2006) about the consequences of the constant (and its uncomputability) for the philosophy of math, which seems to me to be completely wrongheaded, but for reasons I can't put my finger on. It really strikes the same chords in me that a lot of inflated talk about Gödel's second incompleteness theorem strikes. (And indeed, as is obligatory, he mentions that too.) I searched on the title but didn't find any refutations. I wonder if anyone here has any comments on it.
What probability do you assign for it being possible to send information backwards in time, over any time scale?
I was thinking about what general, universal utility would look like. I managed to tie myself into an interesting mental knot.
I started with: Things occurring as intelligent agents would prefer.
If preferences conflict, weight preferences by the intelligence of the preferring agent.
Define intelligent agents as optimization processes.
Define relative intelligences as the relative optimization strengths of the processes.
Define a preference as something an agent optimizes for.
Then, I realized that my definition was a descriptive prediction of events.
Many-Worlds explained, with pretty pictures.
http://kim.oyhus.no/QM_explaining_many-worlds.html
The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.
Enjoy!
I recently met someone investigating physics for the first time, and they asked what I thought of Paul Davies' book The Mind of God. I thought I'd post my response here, not because of my views on Davies, but for the brief statement of outlook trying to explain the position from which I'd judge him.
...The truth is that I don't remember a thing of what he says in the book. I might look it up tomorrow and see if I am reminded of any specific reactions I had. From what I remember of his outlook, I don't think it is an unusual one for a philosophically minded
What happens when you comment on an old pre-LW imported Overcoming Bias post? Does your comment go to the bottom or the top?
Just curious.
Obviously, the thing to do is to reply to an established comment so that the order of comments is maintained. Does voting on old comments now change their order? If so, I should stop doing that...
Better yet, it seems any new comments you want to make might best be exported to an open thread, for the historical authenticity of the post and its original comments.
Quantum Criticality in an Ising Chain: Experimental Evidence for Emergent E8 Symmetry
Ah, emergence...
Popular summary: http://plus.maths.org/latestnews/jan-apr10/e8/index.html
Random thought:
If someone who objects to cryonics because they are worried they wouldn't be the same person on the other side believes in an eternal resurrection with a "new body", as per some Christian belief systems, they should have the same objection. I would expect a response akin to, "God will make sure it is me," but the correlation still amuses me.
Here's a long, somewhat philosophical discussion I had on Facebook, much of which is about cryonics. The major participants are me (Tanner Swett, pro-cryonics), Ian (a Less Wronger, pro-cryonics), and Cameron (anti-cryonics). The discussion pretty much turned into "There is no science behind cryonics!" "Yes there is!" "No there isn't!"
As you can see, nobody changed their minds after the discussion, showing that whatever the irrationality was, we were unable to identify it and repair it.
My mom saw a mouse running around our kitchen a couple of days ago, so she had my father put out some traps. The only traps he had were horrible glue traps. I was having trouble sleeping, so I got out of bed to play video games, and I heard a noise coming from the kitchen. A mouse (or possibly a rat, I don't know) was stuck to one of the traps. Long story short, I put it out of its misery by drowning it in the toilet.
I feel sick.
Vegetative state patients can respond to questions
I have a feeling that the stream-of-consciousness thing that's been popping up since MrHen did it is going to exploit, to the point of indecency, how honest his was. He almost gave it too good of a reputation.
I don't know if this is the place for this or if it has been discussed before [if it has, would someone be willing to provide links?], but what is the general consensus around here on social psychology, specifically the Symbolic Interactionist approach? To rephrase what I'm looking for: what do you all think of the idea that we're social animals shaped purely by our environments, that there's no non-social self, and that reality is shaped by our perspectives of reality?
To throw in another thing I've been curious about, does anyone ...
Not sure how interesting this is to most people here, but I came across a paper (http://phm.cba.mit.edu/papers/09.11.POPL.pdf) published by the MIT Center for Bits and Atoms that discusses Reconfigurable Asynchronous Logic Automata (http://rala.cba.mit.edu/index.html), an apparently new model of computation inspired by physics. The paper claims that this model of computation yields linear algorithms for sorting and for matrix multiplication, which seems fairly significant to me.
Unfortunately, the paper is rather short on details, and I can't seem to find much else about it. I did find part of a http://www.yo...
What kind of useful information/ideas can one extract from a superintelligent AI kept confined in a virtual world, without giving it any clues on how to contact us on the outside?
I'm asking this because a flaw that I see in the AI-in-a-box experiment is that the prisoner and the guard have a language by which they can communicate. If the AI is being tested in a virtual world without being given any clues on how to signal back to humans, then it has no way of learning our language and persuading someone to let it loose.
Physicist Discovers How to Teleport Energy
http://www.technologyreview.com/blog/arxiv/24759/
Energy-Entanglement Relation for Quantum Energy Teleportation
Does the MWI make rationality irrelevant? All choices are made in some universe (because there's at least one extremely improbable quantum event which arranges the particles in your brain to make any given choice). Therefore, you will make the correct choice in at least one universe.
Of course, this leads to the problems of continuing conscious experience (or the lack of it), and whether you should care about what happens to you in all the possible future worlds that you will exist in.
Where are the new monthly threads when I need them? A pox on the +11 EDT zone!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.